This paper presents Paramanu, a family of small, fast, and capable auto-regressive language models for 10 Indian languages across 5 scripts, ranging from 13.29M to 367.5M parameters. Despite their modest size, the models outperform larger models on text generation while using less compute. A novel tokenizer and the grouping of comparable corpora enable multilingual transfer. The models were pretrained on news articles, books, and other text sources.
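To give a sense of what parameter counts in the 13.29M to 367.5M range imply for a decoder-only auto-regressive transformer, the sketch below estimates total parameters from a few architectural knobs. This is a generic back-of-the-envelope formula, not the Paramanu configuration: the vocabulary size, model width, and layer count used here are illustrative assumptions, and small terms (biases, layer norms) are ignored.

```python
def transformer_params(vocab_size: int, d_model: int, n_layers: int,
                       tied_embeddings: bool = True) -> int:
    """Rough parameter count for a decoder-only transformer.

    Assumes standard attention (Q, K, V, output projections: 4 * d^2)
    and a feed-forward block with 4x expansion (8 * d^2) per layer.
    Biases and layer-norm parameters are omitted for simplicity.
    """
    embedding = vocab_size * d_model          # token embedding table
    per_layer = 12 * d_model * d_model        # 4 d^2 attention + 8 d^2 MLP
    total = embedding + n_layers * per_layer
    if not tied_embeddings:
        total += vocab_size * d_model         # separate output projection
    return total


# Hypothetical small configuration (NOT from the paper): a 32k vocabulary,
# width 256, 8 layers lands near the low end of the stated range.
print(transformer_params(32_000, 256, 8))  # ~14.5M parameters
```

Varying width and depth (e.g. doubling `d_model`) moves the count toward the upper end of the range, which is why a single family can span 13.29M to 367.5M parameters with one architecture.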